Designing an AI Centre of Excellence That Actually Accelerates
AI initiative failure rates are commonly estimated at between 70 and 85 percent. The uncomfortable finding from organizations that have studied their own failures is that most AI Centres of Excellence become the bottleneck they were designed to prevent. The same structure that was supposed to accelerate AI adoption, by centralizing expertise, standardizing governance, and coordinating the organization's AI program, ends up slowing the work down, creating approval queues, and generating documentation that business teams route around rather than through.
The problem is almost always the operating model, the governance design, and the mandate. An AI CoE built as an IT governance function with vague authority and no accountability for business outcomes will produce governance artifacts, not business results. An AI CoE built as a cross-functional accelerator with clear mandate, defined authority, and measurable success criteria will do what it was created to do: convert scattered AI experiments into a coherent, governed, scalable enterprise capability.
The distinction between these two outcomes is not a matter of talent or investment. It is a matter of design decisions that most organizations make badly because they model the AI CoE on prior technology governance functions rather than on what the AI transformation challenge actually requires.
What an AI CoE Is Actually For
The clearest definition comes from BuildAIQ's May 2026 framework: an AI CoE is the operating system for turning AI from random experiments into governed, measurable business capability. That definition is useful because it specifies both the input (random experiments) and the output (governed, measurable business capability), and makes the CoE the mechanism that transforms one into the other.
The function has three distinct dimensions that need to work simultaneously. The CoE is an accelerator: it removes friction from AI development and deployment by providing shared platforms, reusable components, standard tooling, and expertise that individual business teams do not have to rebuild from scratch. It is a governor: it sets the standards, policies, and risk controls that ensure AI systems are built safely, ethically, and in compliance with applicable regulations. And it is a translator: it connects AI capability to business outcomes, ensuring that AI initiatives are grounded in specific business problems with measurable returns rather than in technology enthusiasm without organizational purpose.
The failure mode that produces CoE bottlenecks is overweighting the governance dimension at the expense of the acceleration dimension. A CoE that is primarily a review and approval body is a governance body, not a CoE. It adds process without adding capability, and business teams learn quickly that the fastest path to getting AI work done is to avoid engaging with it.
The Three Operating Model Choices
The structure of an AI CoE determines what it can and cannot do, and the choice between the three primary structural models should be driven by the organization's size, AI maturity, risk profile, and the degree of domain specialization across its business units.
Centralized
A centralized CoE owns all AI development capability. Business units submit use cases, the CoE builds and deploys the solutions, and the CoE maintains the systems in production. This model produces consistent quality, strong governance, and efficient use of scarce AI expertise. It also produces the bottleneck failure mode when demand for AI capability exceeds the CoE's capacity to deliver, which it almost always does as the organization's AI ambitions grow.
Centralized models work best in the early stages of AI adoption, when the organization has limited AI talent and benefits from concentrating it, when the use case portfolio is narrow enough that a central team can serve it without becoming overwhelmed, and when governance risk is high enough that centralized control of AI development is a genuine requirement rather than an organizational preference.
Federated
A federated model distributes AI development capability across business units, with each unit owning its own AI team and operating largely independently. The CoE function is reduced to setting minimum standards, providing a common platform, and facilitating knowledge sharing. This model maximizes business unit agility and domain relevance but risks fragmentation: different units building on incompatible foundations, duplicating effort, creating governance gaps, and producing an AI portfolio that cannot be managed or measured coherently at the enterprise level.
Federated models work best in large, highly diversified organizations where business unit domains are genuinely distinct, where the risk of a central bottleneck outweighs the risk of fragmentation, and where the organization has sufficient AI maturity across its business units to operate responsibly without strong central oversight.
Hub and Spoke
The hub-and-spoke model is the dominant best practice for 2026, identified consistently across practitioner analyses as the structure that best balances acceleration with governance. The CoE functions as the central hub: it owns the shared platform, the governance standards, the reusable components, the tooling standards, and the enterprise-level measurement. Business units function as spokes: they own their AI use cases, develop solutions using the platform and standards the hub provides, and are accountable for the business outcomes their AI initiatives produce.
The hub does not own the spokes' work. It enables it. The spoke teams do not operate independently of the hub's standards. They operate within them. That division of accountability is what allows the CoE to scale: the hub's capacity is not the constraint on how many AI initiatives the organization can run, because the spokes do the development work. The hub's job is to make that development work faster, safer, and more consistent than it would be if each spoke were operating without a shared foundation.
The Six Capabilities a CoE Must Have to Function
Regardless of which structural model an organization chooses, a functioning AI CoE requires six operational capabilities. Organizations that launch a CoE without all six will discover the missing ones through the specific failure modes they produce.
| Capability | What It Does | Failure Mode Without It |
|---|---|---|
| Use case intake and prioritization | Structured process for evaluating AI use case requests against strategic alignment, data readiness, feasibility, and expected return (a scoring sketch follows this table) | CoE becomes reactive, chasing whoever shouts loudest rather than funding the highest-return initiatives |
| Shared platform and tooling standards | Common development environment, approved model registry, deployment infrastructure, and monitoring tooling that all AI initiatives use | Each team builds on a different stack, creating incompatible systems that cannot be governed, monitored, or maintained consistently |
| Governance and risk framework | Defined policies for data use, model risk, vendor selection, human review requirements, and regulatory compliance, with enforcement mechanisms | AI systems deployed without consistent risk assessment, creating regulatory exposure and reputational risk that surfaces at the worst possible time |
| Enablement and training | Role-specific AI literacy programs, prompt engineering guides, workflow playbooks, and office hours that build capability across the organization rather than concentrating it in the CoE | CoE becomes a permanent dependency rather than a capacity builder, limiting how fast AI adoption can scale |
| Measurement and value tracking | Defined metrics for each AI initiative connected to business outcomes, tracked on a regular cadence and reported at the portfolio level | CoE cannot demonstrate its value, cannot identify which initiatives warrant additional investment, and loses budget credibility at the first review cycle |
| Production operations and model management | Defined process for monitoring deployed models, detecting performance degradation, managing model updates, and decommissioning systems that no longer deliver value | Models deployed and forgotten, degrading silently until a user notices the outputs are wrong and trust in the program collapses |
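As a concrete illustration of the first capability in the table, the sketch below scores intake requests against the four evaluation dimensions. The dimensions come from the table; the weights, the 1-to-5 scale, and the example use cases are hypothetical placeholders, and a real CoE would calibrate all of them against its own strategy.

```python
from dataclasses import dataclass

# Hypothetical intake rubric: each dimension is scored 1-5 by the intake
# review group. The weights are illustrative, not a prescribed standard.
WEIGHTS = {
    "strategic_alignment": 0.35,
    "expected_return": 0.30,
    "data_readiness": 0.20,
    "feasibility": 0.15,
}

@dataclass
class UseCaseRequest:
    name: str
    scores: dict[str, int]  # dimension -> score on a 1-5 scale

def priority_score(request: UseCaseRequest) -> float:
    """Weighted sum of dimension scores, normalized to a 0-100 scale."""
    raw = sum(WEIGHTS[dim] * request.scores[dim] for dim in WEIGHTS)
    return round(raw / 5 * 100, 1)

# Rank the intake queue so funding follows expected return, not volume.
queue = [
    UseCaseRequest("invoice triage", {"strategic_alignment": 4,
                   "expected_return": 5, "data_readiness": 3, "feasibility": 4}),
    UseCaseRequest("churn prediction", {"strategic_alignment": 5,
                   "expected_return": 4, "data_readiness": 2, "feasibility": 3}),
]
for uc in sorted(queue, key=priority_score, reverse=True):
    print(f"{uc.name}: {priority_score(uc)}")
```

The arithmetic is not the point. The point is that funding decisions become comparable across requests rather than negotiated case by case.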
The Mandate Question: What the CoE Owns and What It Does Not
The mandate confusion that produces bottleneck CoEs is almost always a failure to separate policy authority from delivery authority. When the CoE is simultaneously responsible for setting AI standards and delivering AI solutions, it becomes the approval body and the delivery team at the same time. Every use case requires both CoE approval and CoE capacity. When CoE capacity is limited, both approval and delivery queue up behind it.
The mandate that works separates these clearly. The CoE owns policy: the standards AI systems must meet, the governance processes that must be followed, the approved technology stack, and the measurement requirements. The CoE does not own delivery: individual AI initiatives are owned by the business functions they serve and built by development teams that may sit within those functions or be resourced through the CoE in a consulting capacity, but that are accountable to the business function for the outcome.
This separation means the CoE can scale its governance function independently of its delivery capacity. As the organization launches more AI initiatives, the governance standards apply to all of them without requiring CoE review of each one. The CoE reviews categories of risk rather than individual projects, audits compliance with standards rather than approving each deployment, and provides escalation support for genuinely novel or high-risk cases rather than sitting in the critical path of routine development.
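One way to make "review categories of risk rather than individual projects" concrete is a published risk-tiering table that routes each initiative to a governance path at intake. The sketch below is a minimal illustration assuming three tiers; the tier definitions, examples, and controls are hypothetical, not a compliance framework.

```python
# Hypothetical risk tiering: the CoE publishes the categories and controls,
# spoke teams self-classify at intake, and only the highest tier enters the
# CoE's own review queue. Tier definitions here are illustrative only.
RISK_TIERS = {
    "low": {
        "examples": "internal productivity tools, no personal data",
        "path": "self-certify against the published checklist",
    },
    "medium": {
        "examples": "customer-facing content with human review of outputs",
        "path": "spoke-level review; CoE audits a sample each quarter",
    },
    "high": {
        "examples": "automated decisions that affect individuals",
        "path": "mandatory CoE risk assessment before deployment",
    },
}

def governance_path(declared_tier: str) -> str:
    """Route an initiative to its governance path by declared risk tier."""
    if declared_tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {declared_tier!r}")
    return RISK_TIERS[declared_tier]["path"]

print(governance_path("medium"))
# -> spoke-level review; CoE audits a sample each quarter
```

The design choice that matters is that only the highest tier puts the CoE in the critical path; everything else is governed by published standards and after-the-fact audit.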
The Roles That Actually Need to Be in the CoE
A common failure in AI CoE design is staffing the function exclusively with technical roles. Data scientists, ML engineers, and AI architects are necessary. They are not sufficient. The CoE requires a specific set of non-technical roles that most organizations underinvest in precisely because they are harder to justify in a function that is supposed to be about AI technology.
An AI product manager, who translates business problems into AI use case specifications and manages the relationship between the CoE and business unit stakeholders, is the role most frequently missing and the one most frequently cited as the source of friction between the CoE and the business units. Without it, the two sides speak different languages, and the translation work falls to technical leads who do it poorly and reluctantly.
A responsible AI lead who owns the governance framework, tracks regulatory developments, manages the risk assessment process for new deployments, and reports on the organization's AI risk posture to leadership is the role that becomes critical as AI programs scale and regulatory scrutiny intensifies. EU AI Act enforcement is in effect in 2026. Organizations without a designated owner for AI regulatory compliance are accumulating exposure they have not assessed.
A change management and enablement specialist, who designs and delivers the training, communication, and workflow change programs that allow business functions to adopt AI tools effectively, is the role that determines whether CoE investments produce organizational behavior change or merely technology deployments that nobody uses as intended.
The Sequencing That Works
Building a CoE to full operating capability takes most organizations six to twelve months. The sequence matters because each phase creates prerequisites for the next, and organizations that try to skip phases discover the missing prerequisites through program failures rather than through design.
The first ninety days should establish the charter, the mandate, and the operating model structure. Not the full governance framework. Not the technology platform. The organizational decisions that determine what the CoE is authorized to do, who it reports to, how it relates to the business units it serves, and what it will be measured on. These decisions are harder and more consequential than the technology decisions that follow them, and they need executive involvement that is not available at the same intensity later in the build.
Days ninety to one hundred eighty should establish the shared platform, the use case intake process, and the minimum viable governance framework. The platform should be functional enough to support the first cohort of use cases. The governance framework should cover the risk categories that the first cohort represents. Both should be built to evolve rather than to be comprehensive from the start.
The first cohort of use cases should be selected for three characteristics: they are high enough priority to generate organizational visibility for the CoE, they are tractable enough to be completed within the first operating period, and they span enough of the organization to establish the CoE's cross-functional relevance. Successfully delivering the first cohort is the proof point that creates organizational credibility for the CoE's ongoing mandate and budget. Everything the CoE builds before that proof point is infrastructure. The proof point is what makes the infrastructure worth having.
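A minimal sketch of how the three selection criteria can be applied to a candidate list follows, assuming the intake scoring sketched earlier as the priority signal. The field names and thresholds (a sixteen-week tractability cutoff, a three-unit spread target) are hypothetical placeholders, not recommended values.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    priority: float      # visibility proxy, e.g. the intake score sketched earlier
    est_weeks: int       # delivery estimate from the feasibility review
    business_unit: str

def select_first_cohort(candidates: list[Candidate],
                        max_weeks: int = 16,
                        min_units: int = 3) -> list[Candidate]:
    """Greedy pass over the three cohort criteria: priority, tractability,
    and cross-functional spread. Thresholds are illustrative only."""
    # Tractability: drop anything that cannot land in the operating period.
    tractable = [c for c in candidates if c.est_weeks <= max_weeks]
    cohort: list[Candidate] = []
    units: set[str] = set()
    for c in sorted(tractable, key=lambda c: c.priority, reverse=True):
        # Until the cohort spans min_units business units, only accept
        # candidates that extend the cross-functional spread.
        if c.business_unit in units and len(units) < min_units:
            continue
        cohort.append(c)
        units.add(c.business_unit)
    return cohort
```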
Talk to Us
ClarityArc helps organizations design AI Centres of Excellence with the operating model, mandate, and capability structure that accelerates adoption rather than creating new governance overhead. If you are standing up an AI CoE or trying to fix one that has stalled, we are ready to help you get the design right.
Get in Touch